We carefully compare two model-free control algorithms, Evolution Strategies (ES) and Proximal Policy Optimization (PPO), with receding-horizon model predictive control (MPC) for operating a simulated, price-responsive water heater. Four MPC variants are considered: a one-shot controller with perfect forecasting, which yields optimal control; a limited-horizon controller with perfect forecasting; a controller based on mean forecasts; and a two-stage stochastic programming controller using historical scenarios. In all cases, the MPC models of water temperature and electricity price are exact; only the water demand is uncertain. For comparison, ES and PPO learn neural-network-based policies by interacting directly with the simulated environment under the same scenarios used by MPC. All methods are then evaluated on a separate one-week continuation of the demand time series. We demonstrate that optimal control for this problem is challenging, requiring MPC with a horizon of more than 8 hours and perfect forecasting to attain the lowest cost. Despite this challenge, both ES and PPO learn good general-purpose policies that outperform the mean-forecast and two-stage stochastic MPC controllers in average cost, and that are orders of magnitude faster at computing actions. We show that ES in particular can leverage parallelism, learning a policy in 90 seconds using 1150 CPU cores.
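To make the ES side of the comparison concrete, the following is a minimal sketch of a basic evolution strategies optimizer on a toy stand-in cost (the true controller, environment, and hyperparameters from the paper are not reproduced; `es_minimize` and the quadratic `cost` are illustrative names only):

```python
import random

def es_minimize(cost, dim, iters=200, pop=50, sigma=0.1, lr=0.05, seed=0):
    """Basic evolution strategies: perturb the parameters with Gaussian
    noise, score the perturbations by cost, and step along the
    noise-weighted descent direction. Each inner pair of evaluations is
    independent, which is what makes ES trivially parallelizable."""
    rng = random.Random(seed)
    theta = [0.0] * dim
    for _ in range(iters):
        grad = [0.0] * dim
        for _ in range(pop):
            eps = [rng.gauss(0.0, 1.0) for _ in range(dim)]
            c_pos = cost([t + sigma * e for t, e in zip(theta, eps)])
            c_neg = cost([t - sigma * e for t, e in zip(theta, eps)])
            # Antithetic sampling: (c_pos - c_neg) scores direction eps.
            for i in range(dim):
                grad[i] += (c_pos - c_neg) * eps[i] / (2 * sigma * pop)
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta

# Toy "control cost": squared distance of parameters from a setpoint.
target = [1.0, -2.0, 0.5]
cost = lambda th: sum((t - s) ** 2 for t, s in zip(th, target))
theta = es_minimize(cost, dim=3)
```

Because every perturbation's cost can be evaluated on a separate worker, the population loop maps directly onto the 1150-core parallelism mentioned above.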
Pavement Distress Recognition (PDR) is an important step in pavement inspection and can be powered by image-based automation to expedite the process and reduce labor costs. Pavement images are often in high-resolution with a low ratio of distressed to non-distressed areas. Advanced approaches leverage these properties via dividing images into patches and explore discriminative features in the scale space. However, these approaches usually suffer from information loss during image resizing and low efficiency due to complex learning frameworks. In this paper, we propose a novel and efficient method for PDR. A light network named the Kernel Inversed Pyramidal Resizing Network (KIPRN) is introduced for image resizing, and can be flexibly plugged into the image classification network as a pre-network to exploit resolution and scale information. In KIPRN, pyramidal convolution and kernel inversed convolution are specifically designed to mine discriminative information across different feature granularities and scales. The mined information is passed along to the resized images to yield an informative image pyramid to assist the image classification network for PDR. We applied our method to three well-known Convolutional Neural Networks (CNNs), and conducted an evaluation on a large-scale pavement image dataset named CQU-BPDD. Extensive results demonstrate that KIPRN can generally improve the pavement distress recognition of these CNN models and show that the simple combination of KIPRN and EfficientNet-B3 significantly outperforms the state-of-the-art patch-based method in both performance and efficiency.
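The resizing-to-a-pyramid idea can be sketched in miniature. The sketch below builds an image pyramid with plain 2x2 average pooling as a stand-in for KIPRN's learned resizing (the function names and pooling choice are illustrative, not the paper's architecture):

```python
def downsample(img):
    """Halve a 2D image by 2x2 average pooling (a crude stand-in for
    the learned, information-preserving resizing KIPRN performs)."""
    h, w = len(img), len(img[0])
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1]
              + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(w // 2)] for r in range(h // 2)]

def image_pyramid(img, levels=3):
    """Pyramid of progressively resized images; each level is a
    candidate input scale for the downstream classification network."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = downsample(img)
        pyramid.append(img)
    return pyramid

base = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
pyr = image_pyramid(base)   # 8x8 -> 4x4 -> 2x2
```

KIPRN's contribution is precisely that the resizing is learned (via pyramidal and kernel-inversed convolutions) rather than fixed as here, so discriminative detail survives the downsampling.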
Many visualization techniques have been created to help explain the behavior of convolutional neural networks (CNNs), but they largely consist of static diagrams that convey limited information. Interactive visualizations can provide more rich insights and allow users to more easily explore a model's behavior; however, they are typically not easily reusable and are specific to a particular model. We introduce Visual Feature Search, a novel interactive visualization that is generalizable to any CNN and can easily be incorporated into a researcher's workflow. Our tool allows a user to highlight an image region and search for images from a given dataset with the most similar CNN features. It supports searching through large image datasets with an efficient cache-based search implementation. We demonstrate how our tool elucidates different aspects of model behavior by performing experiments on supervised, self-supervised, and human-edited CNNs. We also release a portable Python library and several IPython notebooks to enable researchers to easily use our tool in their own experiments. Our code can be found at https://github.com/lookingglasslab/VisualFeatureSearch.
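The cache-based search can be illustrated with a small sketch: precompute normalized feature vectors once, then answer each query with a dot-product scan. This is an assumption-laden miniature, not the tool's actual API (`FeatureCache` and its methods are invented names; real CNN features would replace the toy vectors):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

class FeatureCache:
    """Precompute (cache) normalized features once, then answer
    similarity queries with a fast cosine-similarity scan."""
    def __init__(self, features):          # {image_id: feature vector}
        self.cache = {k: normalize(v) for k, v in features.items()}

    def search(self, query, k=3):
        q = normalize(query)
        scored = [(sum(a * b for a, b in zip(q, v)), name)
                  for name, v in self.cache.items()]
        return [name for _, name in sorted(scored, reverse=True)[:k]]

feats = {"cat": [1.0, 0.1], "dog": [0.9, 0.3], "car": [-1.0, 0.2]}
cache = FeatureCache(feats)
top = cache.search([1.0, 0.0], k=2)
```

Because the expensive step (running the CNN over the dataset) happens once at cache-build time, interactive queries reduce to cheap vector arithmetic.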
Recent neuroimaging studies aimed at predicting brain disorders with modern machine learning methods typically include a single modality and rely on supervised, over-parameterized models. However, a single modality provides only a limited view of the highly complex brain. Crucially, supervised models in clinical settings lack accurate diagnostic labels for training. Coarse labels do not capture the long-tailed spectrum of brain disorder phenotypes, which leads to a loss of model generalizability that makes the models less useful in diagnostic settings. This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data. We propose a general taxonomy of inductive biases for capturing the unique and joint information in multimodal self-supervised fusion. The taxonomy forms a family of decoder-free models with reduced computational complexity that capture multi-scale relationships between local and global representations of the multimodal inputs. We comprehensively evaluate the taxonomy using functional and structural magnetic resonance imaging (MRI) data across a spectrum of Alzheimer's disease phenotypes and show that self-supervised models reveal disease-related brain regions and multimodal links without access to the labels during pre-training. The proposed multimodal self-supervised learning yields representations with improved classification performance for both modalities. The accompanying rich and flexible unsupervised deep learning framework captures complex multimodal relationships and provides predictive performance that meets or exceeds that of narrower supervised classification analyses. We present thorough quantitative evidence of how this framework can significantly advance the search for missing links in complex brain disorders.
Contrastive learning relies on the assumption that positive pairs contain related views, e.g., patches of an image or co-occurring multimodal signals of a video, that share certain underlying information about an instance. But what if this assumption is violated? The literature suggests that contrastive learning produces suboptimal representations in the presence of noisy views, e.g., false positive pairs with no apparent shared information. In this work, we propose a new contrastive loss function that is robust against noisy views. We provide rigorous theoretical justification by showing connections to robust symmetric losses for noisy binary classification and by establishing a new contrastive bound based on the Wasserstein distance measure. The proposed loss is completely modality-agnostic and a simple drop-in replacement for the InfoNCE loss, which makes it easy to apply to existing contrastive frameworks. We show that our approach provides consistent improvements over the state of the art on image, video, and graph contrastive learning benchmarks that exhibit a variety of real-world noise patterns.
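For context, here is a minimal sketch of the standard InfoNCE loss that the proposed robust loss replaces (the robust variant itself is not reproduced here; the function name and similarity values are illustrative). It also shows the failure mode the abstract describes: a false positive pair drives the loss high even when the negatives are unchanged:

```python
import math

def info_nce(pos_sim, neg_sims, temperature=0.1):
    """Standard InfoNCE: negative log-softmax of the positive
    similarity against the negatives. This is the loss for which the
    proposed robust loss is a drop-in replacement."""
    logits = [pos_sim / temperature] + [s / temperature for s in neg_sims]
    m = max(logits)
    # Numerically stable log-partition.
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(pos_sim / temperature - log_z)

negatives = [0.1, -0.2, 0.0]
loss_clean = info_nce(0.9, negatives)   # genuinely related views
loss_noisy = info_nce(0.1, negatives)   # "false positive" pair
```

The noisy pair yields a much larger loss, so standard InfoNCE training pushes hard on pairs that share no real information, which is exactly where a robust symmetric loss helps.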
Intuitively, one would expect the accuracy of a trained neural network's predictions on a test sample to correlate with how densely that sample is surrounded by seen training samples in representation space. In this work, we provide theory and experiments supporting this hypothesis. We propose an error function for piecewise-linear neural networks that takes a local region of the network's input space and outputs the smoothed empirical training error: an average of the empirical training errors of other regions, weighted by their distance in the network's representation space. A bound on the expected smoothed error for each region scales inversely with the density of training samples in representation space. Empirically, we verify that this bound is a strong predictor of the inaccuracy of the network's predictions on test samples. For unseen test sets, including those with out-of-distribution samples, ranking test samples by their local regions' errors and discarding the samples with the highest bounds improves prediction accuracy by up to 20% in absolute terms on image classification datasets.
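The distance-weighted averaging at the core of the smoothed error can be sketched in a few lines. This is a schematic under simplifying assumptions (a Gaussian kernel on representation-space distance, regions reduced to center points; `smoothed_error` and the bandwidth are illustrative choices, not the paper's exact construction):

```python
import math

def smoothed_error(query_repr, regions, bandwidth=1.0):
    """Distance-weighted average of per-region empirical training
    errors: regions whose training points lie close to the query in
    representation space dominate the estimate."""
    weighted, total = [], 0.0
    for center, err in regions:
        d = math.dist(query_repr, center)
        w = math.exp(-d * d / (2 * bandwidth ** 2))
        weighted.append((w, err))
        total += w
    return sum(w * e for w, e in weighted) / total

# Regions as (representation-space center, empirical training error).
regions = [([0.0, 0.0], 0.05), ([5.0, 5.0], 0.40)]
near_dense = smoothed_error([0.1, 0.0], regions)   # near the low-error region
near_sparse = smoothed_error([4.9, 5.0], regions)  # near the high-error region
```

A query landing near densely sampled, low-error training regions inherits a low smoothed error, matching the intuition that the bound shrinks as local training density grows.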
We propose an approach to self-supervised representation learning based on maximizing mutual information between features extracted from multiple views of a shared context. For example, one could produce multiple views of a local spatiotemporal context by observing it from different locations (e.g., camera positions within a scene), and via different modalities (e.g., tactile, auditory, or visual). Or, an ImageNet image could provide a context from which one produces multiple views by repeatedly applying data augmentation. Maximizing mutual information between features extracted from these views requires capturing information about high-level factors whose influence spans multiple views -e.g., presence of certain objects or occurrence of certain events. Following our proposed approach, we develop a model which learns image representations that significantly outperform prior methods on the tasks we consider. Most notably, using self-supervised learning, our model learns representations which achieve 68.1% accuracy on Im-ageNet using standard linear evaluation. This beats prior results by over 12% and concurrent results by 7%. When we extend our model to use mixture-based representations, segmentation behaviour emerges as a natural side-effect. Our code is available online: https://github.com/Philip-Bachman/amdim-public.
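The "multiple views via augmentation" idea can be sketched with a toy example: two augmented views of the same image should agree in feature space more than views of different images. Everything here is a stand-in (a flip-and-jitter augmentation, a per-row-mean "encoder", a dot-product agreement score), not the paper's model or its mutual-information objective:

```python
import random

def augment(img, rng):
    """Toy augmentation: random horizontal flip plus small pixel
    jitter, producing another 'view' of the same underlying content."""
    view = [row[::-1] for row in img] if rng.random() < 0.5 else [row[:] for row in img]
    return [[p + rng.uniform(-0.05, 0.05) for p in row] for row in view]

def features(img):
    # A crude 'encoder': per-row means, invariant to horizontal flips.
    return [sum(row) / len(row) for row in img]

def agreement(f, g):
    # Dot-product agreement between two views' features.
    return sum(a * b for a, b in zip(f, g))

rng = random.Random(0)
img_a = [[1.0, 2.0], [3.0, 4.0]]
img_b = [[-1.0, -2.0], [-3.0, -4.0]]
v1, v2 = augment(img_a, rng), augment(img_a, rng)
v3 = augment(img_b, rng)
pos = agreement(features(v1), features(v2))   # views of the same image
neg = agreement(features(v1), features(v3))   # views of different images
```

Maximizing mutual information across views amounts to training the encoder so that `pos`-style scores dominate `neg`-style scores, forcing the features to capture factors that persist across augmentations.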
We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs-both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.
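DGI's patch-versus-summary objective can be sketched in miniature: a readout summarizes the graph, and a discriminator should score real node representations above corrupted ones against that summary. The sketch uses a mean readout and a plain dot-product discriminator as simplifying stand-ins (DGI uses a learned bilinear discriminator; the names here are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dgi_scores(node_reprs, corrupt_reprs):
    """DGI's objective in miniature: a readout (mean over nodes)
    summarizes the graph, and a discriminator (dot product here)
    should score real node representations above corrupted ones."""
    summary = [sum(col) / len(node_reprs) for col in zip(*node_reprs)]
    score = lambda h: sigmoid(sum(a * b for a, b in zip(h, summary)))
    real = [score(h) for h in node_reprs]
    fake = [score(h) for h in corrupt_reprs]
    return real, fake

nodes = [[1.0, 0.5], [0.8, 0.6], [1.1, 0.4]]
# Corruption: features perturbed so node-graph alignment is broken.
corrupted = [[-1.0, -0.5], [-0.8, -0.6], [-1.1, -0.4]]
real, fake = dgi_scores(nodes, corrupted)
```

Training maximizes the gap between the `real` and `fake` scores, which is what forces each patch representation to encode information shared with the global summary, with no random-walk objective involved.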
This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with some standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals.